Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples
Embeddings mapping high-dimensional discrete input to lower-dimensional
continuous vector spaces have been widely adopted in machine learning
applications as a way to capture domain semantics. In interviews with 13 embedding
users across disciplines, we find that comparing embeddings is a key task for
deployment or downstream analysis, but one that unfolds in a tedious fashion that poorly
supports systematic exploration. In response, we present the Embedding
Comparator, an interactive system that presents a global comparison of
embedding spaces alongside fine-grained inspection of local neighborhoods. It
systematically surfaces points of comparison by computing the similarity of the
k-nearest neighbors of every embedded object between a pair of spaces.
Through case studies, we demonstrate that our system rapidly reveals insights, such
as semantic changes following fine-tuning, language changes over time, and
differences between seemingly similar models. In evaluations with 15
participants, we find our system accelerates comparisons by shifting from
laborious manual specification to browsing and manipulating visualizations.
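
The neighborhood comparison the abstract describes lends itself to a short sketch. Below is a minimal Python version of the idea, assuming both spaces embed the same objects in the same row order, and using cosine similarity and Jaccard overlap; the abstract does not specify the system's exact similarity measure, so those choices are assumptions for illustration.

```python
import numpy as np

def knn_indices(emb: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbors under cosine similarity."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a point is not its own neighbor
    return np.argsort(-sims, axis=1)[:, :k]

def neighborhood_similarity(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 10) -> np.ndarray:
    """Jaccard overlap of each object's k-NN sets across two embedding spaces.

    Assumes rows of emb_a and emb_b refer to the same objects in the same order.
    """
    nn_a, nn_b = knn_indices(emb_a, k), knn_indices(emb_b, k)
    scores = np.empty(len(emb_a))
    for i in range(len(emb_a)):
        sa, sb = set(nn_a[i]), set(nn_b[i])
        scores[i] = len(sa & sb) / len(sa | sb)
    # Low scores flag objects whose local neighborhoods changed most between spaces,
    # which are natural candidates for fine-grained inspection.
    return scores
```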
VisText: A Benchmark for Semantically Rich Chart Captioning
Captions that describe or explain charts help improve recall and
comprehension of the depicted data and provide a more accessible medium for
people with visual disabilities. However, current approaches for automatically
generating such captions struggle to articulate the perceptual or cognitive
features that are the hallmark of charts (e.g., complex trends and patterns).
In response, we introduce VisText: a dataset of 12,441 pairs of charts and
captions that describe the charts' construction, report key statistics, and
identify perceptual and cognitive phenomena. In VisText, a chart is available
as three representations: a rasterized image, a backing data table, and a scene
graph -- a hierarchical representation of a chart's visual elements akin to a
web page's Document Object Model (DOM). To evaluate the impact of VisText, we
fine-tune state-of-the-art language models on our chart captioning task and
apply prefix-tuning to produce captions that vary the semantic content they
convey. Our models generate coherent, semantically rich captions and perform on
par with state-of-the-art chart captioning models across machine translation
and text generation metrics. Through qualitative analysis, we identify six
broad categories of errors that our models make that can inform future work.
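
To make the DOM analogy concrete, here is a hypothetical Python sketch of what such a scene graph might look like; the tag names and fields are illustrative assumptions, not VisText's actual schema.

```python
# Hypothetical scene graph for a simple bar chart: a DOM-like tree whose nodes
# are the chart's visual elements. Field names here are illustrative only.
scene_graph = {
    "tag": "chart",
    "children": [
        {"tag": "title", "text": "Monthly revenue"},
        {"tag": "x-axis", "children": [
            {"tag": "tick", "text": "Jan"},
            {"tag": "tick", "text": "Feb"},
        ]},
        {"tag": "marks", "children": [
            {"tag": "bar", "x": "Jan", "height": 120},
            {"tag": "bar", "x": "Feb", "height": 180},
        ]},
    ],
}
```

Like a web page's DOM, this representation exposes the chart's structure (axes, ticks, marks) rather than only its pixels or its backing data, which is what makes it useful as a model input.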
Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods
Saliency methods calculate how important each input feature is to a machine
learning model's prediction, and are commonly used to understand model
reasoning. "Faithfulness", or how fully and accurately the saliency output
reflects the underlying model, is an oft-cited desideratum for these methods.
However, explanation methods must necessarily sacrifice certain information in
service of user-oriented goals such as simplicity. To that end, and akin to
performance metrics, we frame saliency methods as abstractions: individual
tools that provide insight into specific aspects of model behavior and entail
tradeoffs. Using this framing, we describe a framework of nine dimensions to
characterize and compare the properties of saliency methods. We group these
dimensions into three categories that map to different phases of the
interpretation process: methodology, or how the saliency is calculated;
sensitivity, or relationships between the saliency result and the underlying
model or input; and, perceptibility, or how a user interprets the result. As we
show, these dimensions give us a granular vocabulary for describing and
comparing saliency methods -- for instance, allowing us to develop "saliency
cards" as a form of documentation, or helping downstream users understand
tradeoffs and choose a method for a particular use case. Moreover, by situating
existing saliency methods within this framework, we identify opportunities for
future work, including filling gaps in the landscape and developing new
evaluation metrics.
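
As a rough illustration of how the framework could back a "saliency card", here is a hypothetical Python sketch; the three category names come from the abstract, while the individual fields and values are placeholder assumptions, not the paper's actual nine dimensions.

```python
from dataclasses import dataclass

@dataclass
class SaliencyCard:
    """Hypothetical documentation record for a saliency method,
    grouped by the framework's three categories."""
    method: str
    methodology: dict[str, str]     # how the saliency is calculated
    sensitivity: dict[str, str]     # relation between result and model/input
    perceptibility: dict[str, str]  # how a user interprets the result

# Example card; the dimension names and values below are illustrative.
card = SaliencyCard(
    method="Vanilla Gradients",
    methodology={"determinism": "deterministic", "model_access": "gradients"},
    sensitivity={"input_sensitivity": "high"},
    perceptibility={"granularity": "per-feature"},
)
```

Structuring the documentation this way lets downstream users compare methods dimension by dimension when choosing one for a particular use case.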